Ordinal Classification
Ordinal classification, also called ordinal regression (and closely related to ranking), deals with predicting categories that have a natural ordering, where the relative position between classes carries meaningful information. Unlike standard multi-class classification, which treats classes as unordered, ordinal classification recognizes that some prediction errors are more severe than others based on the distance between classes. Examples include customer satisfaction ratings (very unsatisfied, unsatisfied, neutral, satisfied, very satisfied), medical severity stages (mild, moderate, severe), credit ratings (AAA, AA, A, BBB, BB, B), and student grades (A, B, C, D, F).
The key distinction from standard classification is that order matters. Predicting "very satisfied" when the true label is "satisfied" is a smaller error than predicting "very unsatisfied." Standard multi-class methods treat all misclassifications equally, while ordinal methods should penalize predictions proportionally to their distance from the true class. The challenge is designing models and loss functions that respect this ordinal structure.
Threshold-based approaches model ordinal classification as learning cutoff points on a continuous latent scale. A single continuous score is predicted and then compared against learned thresholds to determine which ordinal category it falls into. Because adjacent categories correspond to adjacent intervals on the latent scale, predictions automatically respect the ordering: a prediction cannot skip over intermediate categories. Proportional odds models extend logistic regression by modeling the cumulative probabilities P(Y ≤ k) for each category k, which keeps those probabilities monotone across categories.
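To make the mechanism concrete, here is a minimal sketch (not tied to any particular library) of turning a latent score into ordinal class probabilities. The score and threshold values in the example are made up; in practice both the scoring function and the thresholds would be learned from data.

```python
import numpy as np

def ordinal_class_probs(score, thresholds):
    """Convert a latent score into ordinal class probabilities.

    score      : continuous latent prediction for one instance
    thresholds : increasing cutoffs theta_1 < ... < theta_{K-1}
    Returns an array of K class probabilities that sum to 1.
    """
    # Cumulative probabilities P(Y <= k), k = 1..K-1 (proportional odds form)
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds, dtype=float) - score)))
    # Pad with P(Y <= K) = 1, then difference to recover P(Y = k)
    cum = np.append(cum, 1.0)
    return np.diff(cum, prepend=0.0)

# Example with 4 ordered classes; threshold values are purely illustrative
print(ordinal_class_probs(score=0.3, thresholds=[-1.0, 0.5, 2.0]))
```

Because the thresholds are increasing, the cumulative probabilities are monotone and every per-class probability is non-negative, which is exactly the ordering guarantee described above.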
The Frank and Hall method decomposes the ordinal problem into K-1 binary classification tasks, where the k-th classifier predicts whether the instance belongs to class k or higher versus class k-1 or lower. The final prediction aggregates these binary decisions. Reduction-to-binary approaches like this transform ordinal classification into multiple binary problems while preserving order information through careful problem design.
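A sketch of this decomposition using scikit-learn, assuming labels are integer-coded 0..K-1; the logistic regression base classifier is only a placeholder, and any probabilistic binary classifier could be substituted.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def fit_frank_hall(X, y, base=LogisticRegression(max_iter=1000)):
    """Fit K-1 binary classifiers, the k-th estimating P(Y > k)."""
    y = np.asarray(y)
    classes = np.sort(np.unique(y))
    models = []
    for k in classes[:-1]:
        clf = clone(base)
        clf.fit(X, (y > k).astype(int))   # binary target: "above class k"
        models.append(clf)
    return classes, models

def predict_frank_hall(X, classes, models):
    """Combine the binary estimates into an ordinal prediction."""
    # P(Y > k) for each of the K-1 cut points, shape (n_samples, K-1)
    p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    # P(Y = 0) = 1 - P(Y > 0); P(Y = k) = P(Y > k-1) - P(Y > k); P(Y = K-1) = P(Y > K-2)
    probs = np.column_stack([1 - p_gt[:, 0],
                             p_gt[:, :-1] - p_gt[:, 1:],
                             p_gt[:, -1]])
    probs = np.clip(probs, 0, None)       # guard against small negative differences
    return classes[np.argmax(probs, axis=1)]
```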
Evaluation metrics should reflect ordinal structure. Mean Absolute Error (MAE) treats class labels as integers and measures the average distance between predicted and true classes. Quadratic Weighted Kappa accounts for both agreement and the severity of disagreement. Standard accuracy can be used but doesn't capture that being "one off" is better than being "far off." Ordinal versions of confusion matrices highlight whether errors tend to fall in adjacent categories or distant ones.
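With integer-coded labels, both metrics are available directly in scikit-learn; a brief example (the label arrays are invented for illustration):

```python
from sklearn.metrics import mean_absolute_error, cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 2, 1]   # e.g., satisfaction levels coded 0-4
y_pred = [0, 1, 3, 3, 2, 2, 0]

# MAE: average distance between predicted and true ordinal levels
print(mean_absolute_error(y_true, y_pred))                     # 4/7 ≈ 0.571

# Quadratic Weighted Kappa: disagreements weighted by squared distance
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```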
Popular Algorithms
- Ordinal Logistic Regression - Models cumulative probabilities ensuring ordered predictions; uses proportional odds assumption
- All-Threshold Method - Learns K-1 binary classifiers for cumulative probabilities P(Y ≤ k); aggregates to produce ordinal prediction
- Frank and Hall Method - Decomposes into binary tasks preserving ordinal information; uses ensemble of binary classifiers
- Support Vector Ordinal Regression - Adapts SVM to learn thresholds on continuous output respecting order constraints
- Ordinal Random Forest - Adapts random forest splitting criteria to account for ordinal structure of target variable
- ORCA - Framework for ordinal classification: https://github.com/ayrna/orca
- Neural Networks with Ordinal Loss - Uses specialized loss functions (e.g., Earth Mover's Distance, unimodal distributions) that penalize based on the distance between predicted and true classes; see the sketch after this list
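As a concrete illustration of the last item, here is a minimal PyTorch sketch of a squared Earth Mover's Distance style loss over the ordered classes. It compares cumulative distributions, so probability mass placed far from the true class costs more than mass on adjacent classes. The function name, shapes, and class count are assumptions for the sketch, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def squared_emd_loss(logits, targets, num_classes):
    """Ordinal loss comparing cumulative predicted and true distributions.

    logits  : (batch, num_classes) raw network outputs
    targets : (batch,) integer class labels in 0..num_classes-1
    """
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).float()
    # CDFs of predicted and true distributions over the ordered classes
    cdf_pred = torch.cumsum(probs, dim=1)
    cdf_true = torch.cumsum(one_hot, dim=1)
    # Squared EMD: mass far from the true class shifts the CDF over more bins
    return torch.mean(torch.sum((cdf_pred - cdf_true) ** 2, dim=1))

# Example: batch of 8 instances, 5 ordered classes
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = squared_emd_loss(logits, targets, num_classes=5)
loss.backward()
```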